
    Exploiting the architectural characteristics of software components to improve software reuse

    PhD Thesis. Software development is a costly process for all but the most trivial systems. One commonly known way of minimizing development costs is to reuse previously built software components. However, a significant problem that source-code reusers encounter is the difficulty of finding components that not only provide the functionality they need but also conform to the architecture of the system they are building. To facilitate finding reusable components, an appropriate mechanism is needed for matching the key architectural characteristics of the available source-code components against the characteristics of the system being built. This research develops a precise characterization of the architectural characteristics of source-code components and investigates a new way to describe how appropriate components for reuse can be identified and categorized. Umm Al-Qura University
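    The abstract does not describe the matching mechanism itself; the following is a minimal Python sketch of the idea, with hypothetical characteristic names and a simple scoring function that are not taken from the thesis:

```python
from dataclasses import dataclass, field

@dataclass
class ArchProfile:
    """Hypothetical architectural characteristics of a source-code component."""
    style: str                                       # e.g. "layered", "event-driven"
    interfaces: set = field(default_factory=set)     # provided interface names
    dependencies: set = field(default_factory=set)   # required libraries/frameworks

def match_score(component: ArchProfile, system: ArchProfile) -> float:
    """Score how well a component's architecture fits the target system."""
    style_ok = 1.0 if component.style == system.style else 0.0
    shared = len(component.interfaces & system.interfaces)
    needed = len(system.interfaces) or 1
    conflicts = len(component.dependencies - system.dependencies)
    return 0.5 * style_ok + 0.5 * (shared / needed) - 0.1 * conflicts

# Rank candidate components against the system under construction.
system = ArchProfile("layered", {"IStorage", "ILogger"}, {"jdbc"})
candidates = [
    ArchProfile("layered", {"IStorage"}, {"jdbc"}),
    ArchProfile("event-driven", {"ILogger"}, {"kafka"}),
]
ranked = sorted(candidates, key=lambda c: match_score(c, system), reverse=True)
```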

    An experimental study of a fuzzy adaptive emperor penguin optimizer for global optimization problem

    Emperor Penguin Optimizer (EPO) is a recently developed population-based meta-heuristic algorithm that simulates the huddling behavior of emperor penguins. Mixed results have been observed on the performance of EPO in solving general optimization problems. Within the EPO, two parameters need to be tuned (namely, f and l) to ensure a good balance between exploration (i.e., roaming unknown locations) and exploitation (i.e., exploiting the current known best). Since the search contour varies depending on the optimization problem, the tuning of f and l is problem-dependent, and there is no one-size-fits-all approach. To alleviate these problems, an adaptive mechanism can be introduced in EPO. This paper proposes a fuzzy adaptive variant of EPO, the Fuzzy Adaptive Emperor Penguin Optimizer (FAEPO), to solve this problem. As the name suggests, FAEPO can adaptively tune the parameters f and l throughout the search based on three measures (i.e., quality, success rate, and diversity of the current search) via fuzzy decisions. A test suite of twelve optimization benchmark test functions and three global optimization problems (Team Formation Optimization (TFO), Low Autocorrelation Binary Sequence (LABS), and Modified Condition/Decision Coverage (MC/DC) test-case generation) was solved using the proposed algorithm, and the results were compared against those of benchmark meta-heuristic algorithms. The experimental results demonstrate that FAEPO significantly improves the performance of its predecessor (EPO) and gives superior performance against the competing meta-heuristic algorithms, including an improved variant of EPO (IEPO).
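    The abstract does not give FAEPO's fuzzy rule base; the sketch below only illustrates the general idea of fuzzy-adaptive parameter control, with illustrative triangular memberships and rules, and commonly used EPO ranges for f and l assumed rather than taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_adapt(quality, success_rate, diversity):
    """Hypothetical fuzzy update of EPO's f and l.

    All three measures are assumed normalized to [0, 1]. Low diversity or a
    low success rate pushes toward exploration; high quality pushes toward
    exploitation. The rules and the ranges f in [2, 3], l in [1.5, 2] are
    illustrative assumptions, not the FAEPO paper's rule base.
    """
    explore = max(tri(diversity, -0.5, 0.0, 0.5), tri(success_rate, -0.5, 0.0, 0.5))
    exploit = tri(quality, 0.5, 1.0, 1.5)
    total = max(explore + exploit, 1e-9)
    f_new = (explore * 3.0 + exploit * 2.0) / total   # exploration favors larger f
    l_new = (explore * 2.0 + exploit * 1.5) / total
    return f_new, l_new

print(fuzzy_adapt(quality=0.9, success_rate=0.2, diversity=0.1))  # -> (2.5, 1.75)
```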

    An efficient data replication technique with fault tolerance approach using BVAG with checkpoint and rollback-recovery

    Data replication is one of the key approaches in distributed database management, as well as in computational intelligence schemes, because it improves data access and reliability. The performance of a data replication technique can be critical when failures interrupt transactions. To develop a more efficient data replication technique that can cope with failure, a fault tolerance approach needs to be applied to the replication transaction. Fault tolerance is a core issue in transaction management, as it preserves the operation of a transaction that is prone to failure. In this study, a data replication technique known as Binary Vote Assignment on Grid (BVAG) is combined with a fault tolerance approach named Checkpoint and Rollback-Recovery (CR) to evaluate the effectiveness of applying fault tolerance in a data replication transaction. The Binary Vote Assignment on Grid with Checkpoint and Rollback-Recovery Transaction Manager (BVAGCRTM) is used to run the proposed BVAGCR method. The performance of the proposed BVAGCR is compared to standard BVAG in terms of total execution time for a single data replication transaction. The experimental results reveal that the CR fault tolerance approach improves the total execution time of BVAG in a failure environment by about 31.65%. Besides improving total execution time, BVAGCR also reduces the time taken to execute the most critical phase in BVAGCRTM, the Update (U) phase, by 98.82%. Based on these benefits, BVAGCR is recommended as a new and efficient technique for reliable data replication under failure conditions in distributed databases.
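    As a rough illustration of the CR idea (not the BVAGCRTM design, whose quorum and grid logic the abstract does not detail), a checkpoint taken before the critical Update phase lets a failed replicated update roll back instead of restarting from scratch:

```python
import copy

class CheckpointedTransaction:
    """Minimal sketch of checkpoint and rollback-recovery (CR) around a
    replicated update. Class and method names are illustrative only."""

    def __init__(self, replicas):
        self.replicas = replicas          # replica_id -> data dict
        self.checkpoint = None

    def take_checkpoint(self):
        # Save a consistent snapshot before the critical Update phase.
        self.checkpoint = copy.deepcopy(self.replicas)

    def rollback(self):
        # Restore the last snapshot after a failure.
        self.replicas = copy.deepcopy(self.checkpoint)

    def update(self, key, value, fail=False):
        self.take_checkpoint()
        try:
            for rid in self.replicas:
                if fail:
                    raise RuntimeError(f"replica {rid} failed mid-update")
                self.replicas[rid][key] = value
        except RuntimeError:
            self.rollback()   # recover to the checkpoint instead of restarting
            return False
        return True

tx = CheckpointedTransaction({"r1": {}, "r2": {}})
assert tx.update("x", 42) is True
assert tx.update("x", 99, fail=True) is False   # rolled back; x stays 42
assert tx.replicas["r1"]["x"] == 42
```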

    Stochastic process and tutorial of the African buffalo optimization

    This paper presents the data description of the African Buffalo Optimization algorithm (ABO). ABO is a recently designed optimization algorithm inspired by the migratory behaviour of African buffalos in the vast African landscape. The way buffalos organize herds that can exceed a thousand animals using just two principal sounds, the /maaa/ and /waaa/ calls, provides a good foundation for the development of an optimization algorithm. Since elaborate descriptions of the manual workings of optimization algorithms are rare in the literature, this paper aims to fill that gap, and this is its main contribution. An elaborate manual description of the workings of an optimization algorithm makes it user-friendly and encourages reproducibility of the experimental procedures performed with it. Describing the algorithm's basic flow, stochastic and data generation processes in language simple enough for non-experts to appreciate and use, together with practical implementations on the popular Rosenbrock and Shekel's Foxholes benchmark functions, will help the research community benefit maximally from this novel algorithm. Finally, benchmarking the experimental output of ABO against the popular, highly effective and efficient Cuckoo Search and Flower Pollination Algorithm underscores ABO as a worthy contribution to the existing body of population-based optimization algorithms.
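    For the flavor of the algorithm, the following sketch follows the commonly cited ABO update equations (the /maaa/ exploitation move and the /waaa/ exploration move) on the Rosenbrock function mentioned in the abstract; the parameter values are illustrative, not the paper's:

```python
import numpy as np

def rosenbrock(x):
    return sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def abo(n_buffalos=30, dim=2, iters=500, lp1=0.6, lp2=0.4, lam=1.0, seed=0):
    """Minimal sketch of the African Buffalo Optimization main loop."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-5, 5, (n_buffalos, dim))   # exploration ("waaa") moves
    m = np.zeros((n_buffalos, dim))             # exploitation ("maaa") moves
    bp = w.copy()                               # each buffalo's best location
    bp_fit = np.array([rosenbrock(x) for x in w])
    bg = bp[bp_fit.argmin()].copy()             # herd's best location
    for _ in range(iters):
        # /maaa/ call: move toward the herd best and each personal best.
        m = m + lp1 * (bg - w) + lp2 * (bp - w)
        # /waaa/ call: update the exploration positions.
        w = (w + m) / lam
        fit = np.array([rosenbrock(x) for x in w])
        better = fit < bp_fit
        bp[better], bp_fit[better] = w[better], fit[better]
        bg = bp[bp_fit.argmin()].copy()
    return bg, bp_fit.min()

best_x, best_f = abo()
```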

    Ecological and confined domain ontology construction scheme using concept clustering for knowledge management

    Knowledge management in a structured system is a complicated task that requires common, standardized methods acceptable to all actors in the system. Ontology, in this regard, is a primary element and plays a central role in knowledge management, interoperability between various departments, and better decision making. Ontology construction for structured systems involves logical and structural complications. Researchers have already proposed a variety of domain ontology construction schemes. However, these schemes do not involve some important phases of ontology construction that make ontologies more collaborative. Furthermore, they do not provide details of the activities and methods involved in constructing an ontology, which may make the ontology difficult to implement. The major objectives of this research were to compare some existing ontology construction schemes and to propose an enhanced ecological and confined domain ontology construction (EC-DOC) scheme for structured knowledge management. The proposed scheme introduces five important phases for constructing an ontology, with a major focus on conceptualizing and clustering domain concepts. In the conceptualization phase, a glossary of domain-related concepts and their properties is maintained, and a Fuzzy C-Means soft clustering mechanism is used to form clusters of these concepts. In addition, localization of concepts is performed immediately after the conceptualization phase, and a translation file of localized concepts is created. The EC-DOC scheme can provide accurate concepts for the terms of a specific domain, and these concepts can be made available in a preferred local language.
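    Since the abstract names Fuzzy C-Means as the clustering mechanism, a minimal self-contained FCM sketch may help; it assumes the domain concepts have already been embedded as numeric feature vectors, a representation step the abstract does not specify:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means: returns the soft membership matrix U (n x c)
    and the cluster centers (c x dim)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per row
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances from every point to every center, kept strictly positive.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_ij)^p.
        U_new = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            return U_new, centers
        U = U_new
    return U, centers

# Toy usage: 6 concept vectors in 2-D, 2 soft clusters.
X = np.array([[0, 0], [0, 1], [1, 0], [9, 9], [9, 8], [8, 9]], float)
U, centers = fuzzy_c_means(X, c=2)
```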

    Identifying critical dimensions for project success in R&D public sector using Delphi study and validation techniques

    © (2020) The Authors. Published by IEEE. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher's website: https://doi.org/10.1109/ACCESS.2021.3112112

    In the current century, organizations face ever more dynamic ecosystems and are constantly devising strategies to meet their challenges, including implementing the right organizational structure and avoiding project schedule delays to achieve project success. Unfortunately, the classification of significant project success dimensions in the public-sector R&D environment remains an elusive concept. This study adopts a multi-dimensional qualitative and quantitative approach to explore the critical dimensions of organizational structure and schedule management that enhance or hinder project success in public-sector R&D organizations. In Phase 1, a Delphi study is conducted, and the results of reliability and other tests serve as the input of Phase 2; on the basis of these tests, variables are selected for the final questionnaire. In Phase 2, through a survey with 285 responses from a public-sector R&D environment, the proposed framework is validated by establishing face, content, and construct validity. The results indicate that formalization, specialization, differentiation, coordination mechanism, decentralization, and authority of managers have a significant effect on schedule management and the successful execution of R&D projects, whereas centralization and departmentalization do not correlate strongly. The results also imply that decentralized (organic) organizational structures are preferable to centralized (mechanistic) structures for executing R&D projects when proposed timelines must be met. The proposed framework will act as a supporting mechanism for engineering managers to deal with organizational structure and schedule management factors in a highly uncertain R&D environment where projects frequently deviate from their anticipated timelines.

    Industry 4.0: Architecture and equipment revolution

    © 2020 The Authors. Published by Tech Science Press. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher's website: https://www.techscience.com/cmc/v66n2/40642

    The development of science and technology has led to the era of Industry 4.0, whose core concept is the combination of "material and informationization." In the supply chain and manufacturing process, the "material" of the physical entity world is realized through data, identity, intelligence, and information. Industry 4.0 is a disruptive transformation and upgrade of traditional industrialization into intelligent industrialization, based on the Internet of Things and Big Data. The goal is "maximizing production efficiency, minimizing production costs, and maximizing the individual needs of human beings for products and services." Achieving this goal will bring about a major leap in the history of industry, leading to the "Fourth Industrial Revolution." This paper presents a detailed discussion of industrial big data, strategic roles, architectures, characteristics, and four types of innovative business models that can generate profits for enterprises. The key revolutionary aspect of Industry 4.0, the equipment revolution, is explained, and six important attributes of equipment are described from the Industry 4.0 perspective. The authors would like to thank the Deanship of Scientific Research (DSR) at Umm Al-Qura University for partially funding this work (Grant# 17-COM-1-01-0007).

    Applications of ontology in the Internet of Things: a systematic analysis

    Ontology has been increasingly implemented to facilitate Internet of Things (IoT) activities, such as tracking and information discovery, storage, information exchange, and object addressing. However, a complete understanding of the use of ontology in the IoT mechanism is still lacking. The main goal of this research is to recognize the use of ontology in the IoT process and investigate the services ontology provides in IoT activities. A systematic literature review (SLR) is conducted using predefined protocols to analyze the literature on the usage of ontologies in IoT. The following conclusions are obtained from the SLR. (1) The primary studies (115 selected articles) address the need for ontologies in IoT in industry and academia, especially to mitigate the interoperability and integration issues of IoT devices. (2) About 31.30% of the literature discusses ontology development concerning the IoT interoperability issue, while IoT privacy and integration issues are only partially discussed. (3) The styles of modeling IoT ontologies are diverse, with 35.65% of the studies adopting the OWL style. (4) Thirty-two articles (27.83% of the studies) reuse IoT ontologies to handle diverse IoT methodologies. (5) A total of 45 IoT ontologies are well acknowledged, but none has been widely utilized by the IoT community. An in-depth analysis of different IoT ontologies suggests that the existing ontologies are beneficial in designing new IoT ontologies or in achieving the three main requirements of the IoT field: interoperability, integration, and privacy. This SLR concludes by identifying several validity threats and future directions.

    A Tabu Search hyper-heuristic strategy for t-way test suite generation

    This paper proposes a novel hybrid t-way test generation strategy (where t indicates interaction strength), called High Level Hyper-Heuristic (HHH). HHH adopts Tabu Search as its high-level meta-heuristic and leverages the strengths of four low-level meta-heuristics: Teaching-Learning-based Optimization, Global Neighborhood Algorithm, Particle Swarm Optimization, and the Cuckoo Search Algorithm. HHH is able to capitalize on the strengths, and limit the deficiencies, of each individual algorithm in a collective and synergistic manner. Unlike existing hyper-heuristics, HHH relies on three defined operators, based on improvement, intensification, and diversification, to adaptively select the most suitable meta-heuristic at any particular time. Our results are promising, as HHH manages to outperform existing t-way strategies on many of the benchmarks.
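    The abstract outlines the selection logic only at a high level; the sketch below shows one way a Tabu Search hyper-heuristic can choose among low-level heuristics, simplifying the paper's improvement/intensification/diversification operators into keep-on-improvement and tabu-on-failure rules:

```python
import random
from collections import deque

def tabu_hyper_heuristic(llh_pool, evaluate, init_solution, budget=1000, tabu_len=3):
    """Sketch of a Tabu Search hyper-heuristic in the spirit of HHH.

    `llh_pool` maps a name to a low-level heuristic: a function that takes a
    solution and returns a perturbed candidate. The acceptance rules here
    (keep an improving heuristic, make a non-improving one tabu) are a
    simplification of the paper's three operators."""
    tabu = deque(maxlen=tabu_len)
    best = current = init_solution
    best_f = evaluate(best)
    name = random.choice(list(llh_pool))
    for _ in range(budget):
        candidate = llh_pool[name](current)
        f = evaluate(candidate)
        if f < best_f:                     # improvement: keep this heuristic
            best, best_f, current = candidate, f, candidate
        else:                              # diversify: tabu it, pick another
            tabu.append(name)
            choices = [n for n in llh_pool if n not in tabu] or list(llh_pool)
            name = random.choice(choices)
    return best, best_f

# Toy usage: minimize x^2 with two hypothetical low-level heuristics.
pool = {
    "small_step": lambda x: x + random.uniform(-0.1, 0.1),
    "big_step":   lambda x: x + random.uniform(-1.0, 1.0),
}
best, best_f = tabu_hyper_heuristic(pool, lambda x: x * x, init_solution=5.0)
```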